Distributed Linearly Separable Computation

Authors

Abstract

This paper formulates a distributed computation problem, where a master asks ${\mathsf N}$ distributed workers to compute a linearly separable function. The task function can be expressed as ${\mathsf K}_{\mathrm{c}}$ linear combinations of ${\mathsf K}$ messages, where each message is a function of one dataset. Our objective is to find the optimal tradeoff between the computation cost (number of uncoded datasets assigned to each worker) and the communication cost (number of symbols the master must download), such that from the answers of any ${\mathsf N}_{\mathrm{r}}$ out of ${\mathsf N}$ workers the master can recover the task function with high probability, where the coefficients of the linear combinations are uniformly i.i.d. over some large enough finite field. The formulated problem can be seen as a generalized version of some existing problems, such as distributed gradient coding and distributed linear transform. In this paper, we consider the specific case where the computation cost is minimum, and propose novel achievability schemes and converse bounds on the communication cost. Achievability and converse bounds coincide for some system parameters; when they do not match, we prove the optimality of the achievable computing scheme under the constraint of the widely used 'cyclic assignment' of datasets to workers. Our results also show that when ${\mathsf K}={\mathsf N}$, with the same communication cost as the gradient coding scheme proposed by Tandon et al., which recovers one linear combination of the messages, our proposed scheme can let the master recover an additional ${\mathsf N}_{\mathrm{r}}-1$ independent linear combinations of the messages with high probability.
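As a toy numerical illustration of the setting (not the paper's actual coding scheme), the sketch below uses hypothetical small parameters ${\mathsf K}=4$, ${\mathsf K}_{\mathrm{c}}=2$, ${\mathsf N}=5$, ${\mathsf N}_{\mathrm{r}}=3$ over an illustrative prime field: the demanded task is a set of random linear combinations of the messages, each worker returns one coded symbol, and the master inverts a small system to recover the task from a subset of answers.

```python
import random

# Hypothetical small parameters, for illustration only:
# K messages, K_c demanded linear combinations, N workers, any N_r answer.
p = 2_000_003                      # a large prime field (illustrative choice)
K, K_c, N, N_r = 4, 2, 5, 3

random.seed(0)
W = [random.randrange(p) for _ in range(K)]          # messages (scalars here)
# demand: K_c uniformly random linear combinations of the K messages
F = [[random.randrange(p) for _ in range(K)] for _ in range(K_c)]
task = [sum(f * w for f, w in zip(row, W)) % p for row in F]

# each worker returns one coded symbol: a random combination of the task values
G = [[random.randrange(p) for _ in range(K_c)] for _ in range(N)]
answers = [sum(g * t for g, t in zip(row, task)) % p for row in G]

# the master hears from any N_r workers; two independent answers suffice here
# (independence holds with high probability over a large field)
(a, b), (c, d) = G[0], G[1]
det = (a * d - b * c) % p
inv = pow(det, p - 2, p)           # Fermat inverse, valid since p is prime
t0 = ((d * answers[0] - b * answers[1]) * inv) % p
t1 = ((a * answers[1] - c * answers[0]) * inv) % p
assert [t0, t1] == task            # the demanded combinations are recovered
```

Here each message is a single field element for brevity; in the paper's model each message is a long vector, but the recovery step is the same linear-algebraic inversion per symbol.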



Related articles

Linearly Separable Boolean Function

The maximum absolute value of integral weights sufficient to represent any linearly separable Boolean function is investigated. It is shown that the upper bounds exhibited by Muroga (1971) for rational weights satisfying the normalized system of inequalities also hold for integral weights. Thereby, the previous best known upper bound for integers is improved by approximately a factor of 1/2.
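As a concrete reminder of the definition behind this snippet (a minimal brute-force check, not Muroga's bound): a Boolean function is linearly separable exactly when a single integer-weight threshold gate computes it.

```python
from itertools import product

def threshold_gate(weights, t):
    """Boolean function realized by a single linear threshold gate."""
    return lambda x: int(sum(w * xi for w, xi in zip(weights, x)) >= t)

majority = threshold_gate((1, 1, 1), 2)      # small integer weights suffice

def is_linearly_separable(f, n, bound):
    """Brute force: is f realized by integer weights and threshold
    with absolute values at most `bound`?"""
    rng = range(-bound, bound + 1)
    for ws in product(rng, repeat=n):
        for t in rng:
            g = threshold_gate(ws, t)
            if all(g(x) == f(x) for x in product((0, 1), repeat=n)):
                return True
    return False

xor2 = lambda x: x[0] ^ x[1]
assert is_linearly_separable(majority, 3, 2)      # majority is separable
assert not is_linearly_separable(xor2, 2, 2)      # XOR famously is not
```

The weight-size question studied in the paper is how large `bound` must be allowed to grow, in the worst case, as the number of inputs increases.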


Learning Linearly Separable Languages

For a finite alphabet A, we define a class of embeddings of A∗ into an infinite-dimensional feature space X and show that its finitely supported hyperplanes define regular languages. This suggests a general strategy for learning regular languages from positive and negative examples. We apply this strategy to the piecewise testable languages, presenting an embedding under which these are precise...
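To make the embedding idea concrete, here is a minimal sketch of one such feature map (a hypothetical illustration, assuming the scattered-subsequence features commonly associated with piecewise testable languages):

```python
from itertools import combinations

def subseq_features(word, k):
    """Map a word to the set of its scattered subsequences of length <= k.
    Piecewise testable languages are Boolean combinations of such features,
    so each corresponds to a finitely supported separator in this space."""
    feats = set()
    for n in range(1, k + 1):
        for idxs in combinations(range(len(word)), n):
            feats.add(''.join(word[i] for i in idxs))
    return feats

# membership in "contains 'ab' as a scattered subsequence" is one feature test
assert 'ab' in subseq_features('xaxxb', 2)
assert 'ab' not in subseq_features('ba', 2)
```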


MinSVM for Linearly Separable and Imbalanced Datasets

Class imbalance (CI) is common in most non-synthetic datasets and presents a major challenge for many classification algorithms geared towards optimizing overall accuracy, since the minority-class misclassification risk is often higher than the majority-class one. Support vector machine (SVM), a machine learning (ML) technique deeply rooted in statistics, maximizes linear margins between classes and gen...
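For intuition about what a linear separator on imbalanced but separable data looks like, the plain perceptron below finds *some* separating hyperplane (an illustration only; it is not MinSVM and does not maximize the margin):

```python
def perceptron(points, labels, epochs=100):
    """Plain perceptron: finds some separating hyperplane w.x + b = 0
    on linearly separable data (not the maximum-margin SVM solution)."""
    w = [0.0] * len(points[0])
    b = 0.0
    for _ in range(epochs):
        updated = False
        for x, y in zip(points, labels):          # labels in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                updated = True
        if not updated:                           # converged: all points correct
            break
    return w, b

# imbalanced but separable toy set: four majority points, two minority points
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (4, 4), (5, 5)]
ys  = [-1, -1, -1, -1, +1, +1]
w, b = perceptron(pts, ys)
assert all(y * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0
           for x, y in zip(pts, ys))
```

The separator found this way can sit arbitrarily close to the minority class; margin-aware formulations such as the one in this snippet's paper aim to control exactly that.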


Selection of the Linearly Separable Feature Subsets

We address a situation when more than one feature subset allows for linear separability of given data sets. Such a situation can occur if a small number of cases is represented in a highly dimensional feature space. The method of feature selection based on minimisation of a special criterion function is analysed here. This criterion function is convex and piecewise-linear (CPL). The proposed ...


Optimal CNN Templates for Linearly-Separable One-Dimensional Cellular Automata

In this tutorial, we present optimal Cellular Nonlinear Network (CNN) templates for implementing linearly-separable one-dimensional (1-D) Cellular Automata (CA). From the gallery of CNN templates presented in this paper, one can calculate any of the 256 1-D CA Rules studied by Wolfram using a CNN Universal Machine chip that is several orders of magnitude faster than conventional programming on ...
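For comparison with the CNN hardware, one synchronous update of an elementary (radius-1, binary) CA can be sketched in software under the Wolfram rule numbering; Rule 232, the three-cell majority, is a standard example of a linearly separable rule (an illustrative implementation, not the paper's templates):

```python
def eca_step(cells, rule):
    """One synchronous update of an elementary 1-D CA under Wolfram rule
    numbering, with periodic boundary conditions."""
    n = len(cells)
    table = [(rule >> k) & 1 for k in range(8)]   # bit k = output for neighborhood k
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Rule 232 outputs the majority of each cell and its two neighbours
assert eca_step([0, 1, 1, 0, 1], 232) == [1, 1, 1, 1, 0]
```

A rule is linearly separable when its 8-entry truth table, viewed as a Boolean function of the three neighborhood cells, can be computed by a single threshold gate; such rules map directly onto a single uncoupled CNN template.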



Journal

Journal title: IEEE Transactions on Information Theory

Year: 2022

ISSN: 0018-9448, 1557-9654

DOI: https://doi.org/10.1109/tit.2021.3127910